@Shah621 Shah621 commented Dec 15, 2024

Surrogate Extraction Fidelity Results for GNN Models

This document presents the fidelity results of surrogate models when varying the number of attack nodes for three popular datasets: Cora, Citeseer, and PubMed.
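For clarity, fidelity here means the fraction of queried nodes on which the surrogate's predicted label agrees with the target model's predicted label (not the ground-truth label). A minimal sketch of that metric, assuming `target_preds` and `surrogate_preds` are lists of predicted class labels over the same nodes:

```python
def fidelity(target_preds, surrogate_preds):
    """Fraction of nodes where the surrogate matches the target's prediction."""
    assert len(target_preds) == len(surrogate_preds)
    matches = sum(t == s for t, s in zip(target_preds, surrogate_preds))
    return matches / len(target_preds)

# Toy example: surrogate agrees with the target on 3 of 4 nodes.
print(fidelity([0, 1, 2, 1], [0, 1, 0, 1]))  # 0.75
```

The percentages in the tables below are this ratio scaled by 100.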

Results Summary

Dataset: Cora

| Number of Attack Nodes | Fidelity (%) |
|---|---|
| 140 | 85.38 |
| 280 | 85.23 |
| 420 | 83.57 |
| 560 | 83.16 |
| 700 | 83.12 |

Dataset: Citeseer

| Number of Attack Nodes | Fidelity (%) |
|---|---|
| 140 | 78.21 |
| 280 | 80.13 |
| 420 | 78.48 |
| 560 | 78.36 |
| 700 | 79.35 |

Dataset: PubMed

| Number of Attack Nodes | Fidelity (%) |
|---|---|
| 140 | 88.22 |
| 280 | 88.13 |
| 420 | 88.91 |
| 560 | 89.21 |
| 700 | 88.71 |

Analysis

  • Cora shows a slight decline in fidelity as the number of attack nodes increases.
  • Citeseer exhibits variability, with fidelity peaking at 280 attack nodes.
  • PubMed maintains consistently high fidelity, with a slight increase as the number of attack nodes increases to 560.

These results suggest that surrogate extraction is most effective against the PubMed-trained model (highest fidelity across all attack budgets) and least effective against Citeseer. Note that higher fidelity means the attack succeeds in copying the target more faithfully, i.e., lower robustness to extraction.
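The per-dataset comparison can be summarized by averaging the reported fidelity over the five attack budgets. A small sketch using only the numbers from the tables above:

```python
# Fidelity values (%) copied from the results tables, one list per dataset,
# ordered by attack-node budget: 140, 280, 420, 560, 700.
results = {
    "Cora":     [85.38, 85.23, 83.57, 83.16, 83.12],
    "Citeseer": [78.21, 80.13, 78.48, 78.36, 79.35],
    "PubMed":   [88.22, 88.13, 88.91, 89.21, 88.71],
}

# Mean fidelity per dataset, highest first.
means = {name: sum(vals) / len(vals) for name, vals in results.items()}
for name, m in sorted(means.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {m:.2f}%")
# PubMed: 88.64%
# Cora: 84.09%
# Citeseer: 78.91%
```

The roughly 10-point gap between PubMed and Citeseer holds at every attack budget, not just on average.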

Conclusion

The fidelity of surrogate models depends on both the characteristics of the dataset and the number of attack nodes. These results help characterize how robust Graph Neural Networks (GNNs) trained on different datasets are to surrogate extraction attacks.

Sparshkhare1306 pushed a commit to Sparshkhare1306/PyGIP-backup that referenced this pull request Oct 1, 2025
Refactor DFEAAttack initialization and evaluation methods